Tablefy Output and Add Demo CI Workflow #7
Conversation
robshakir left a comment:
One minor change, but this LGTM.
I assume that the 'fail' output is just a demo that will be reverted before submission?
The demo is allowed to fail. This is because patterns must be added prior to being used in …
I'm confused -- if the demo is allowed to fail, then it probably should not be something we're running in the checks for a PR. If we're doing a negative test here, then make the "success" of the check be that the test is failing. That way you can still check that the failure is caught correctly, but not end up with a scenario where it looks like checks are failing for the PR.
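A minimal sketch of what such an inverted ("negative") check could look like as a GitHub Actions step. The job name, the `run_regex_tests.sh` entry point, and the file layout are all hypothetical stand-ins, not the actual workflow from this PR; only the exit-status inversion is the point.

```yaml
jobs:
  demo-negative-test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Expect the demo regex test to fail
        run: |
          # run_regex_tests.sh is a hypothetical entry point. Invert its
          # exit status so this check passes only when the test fails;
          # a regression that makes the test pass is then caught by CI.
          if ./run_regex_tests.sh; then
            echo "negative test unexpectedly passed" >&2
            exit 1
          fi
```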
OK, that makes sense to me; I've made the run pass now. It will still be part of the release process to make sure that the minor version is incremented whenever a new test breaks existing patterns. If unsure, simply increment the minor version and update it in the model repos.
@aashaikh Adding you for review of the "Demo" CI checks. This is the workflow I'm proposing for adding new tests or changing tests for existing regexes; I haven't thought about how to deal with adding new regexes yet. I think Rob is largely OK with the proposal, but I want to get your double check here.
I'll assume it's OK that the "Demo" CI steps pass despite regex failures; we can change this if it turns out not to be desired. The exact output we expect from "Demo" has to be checked manually, so an improvement would be to post a comment or a link to a gist with the failure messages. I've opened #8 to track that.
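A rough sketch of the improvement tracked in #8, assuming the `gh` CLI is available on the runner (it is preinstalled on GitHub-hosted runners) and that a token with gist scope is provided via a hypothetical `GIST_TOKEN` secret; the default `GITHUB_TOKEN` cannot create gists. The step name and test command are also hypothetical.

```yaml
      - name: Post demo output as a gist link
        env:
          GH_TOKEN: ${{ secrets.GIST_TOKEN }}  # hypothetical PAT with gist scope
        run: |
          # Capture the demo output even when the tests fail, upload it as
          # a gist, and link it from a PR comment for manual inspection.
          ./run_regex_tests.sh > demo-output.txt 2>&1 || true
          GIST_URL="$(gh gist create demo-output.txt)"
          gh pr comment "${{ github.event.pull_request.number }}" \
            --body "Demo regex test output: ${GIST_URL}"
```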
The demo workflow is not required to pass -- its purpose is to show what the test results would be if it were run on the current public models.
Example failing (table) output: https://gist.github.com/OpenConfigBot/a4af6cb5cab50719b08017e93faa577b
Example passing output: https://gist.github.com/OpenConfigBot/75654c2ce460f11169acc06f37f1b1aa
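For reference, one way to keep a demo job informational rather than blocking is the step-level `continue-on-error` setting in GitHub Actions: the step's failure is recorded in the logs, but the job still reports success. The step name and command flags below are hypothetical, not taken from this PR's workflow.

```yaml
      - name: Run demo against current public models
        # Failure is shown inline in the run but does not fail the job.
        continue-on-error: true
        run: ./run_regex_tests.sh --models ./public-models  # hypothetical flags
```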